Advancing Student Writing Through Automated Syntax Feedback
Zeinalipour, Kamyar, Mehak, Mehak, Parsamotamed, Fatemeh, Maggini, Marco, Gori, Marco
This study underscores the pivotal role of syntax feedback in improving students' syntactic proficiency. Recognizing the challenges learners face in mastering syntactic nuances, we introduce a specialized dataset, Essay-Syntax-Instruct, designed to enhance the understanding and application of English syntax among these students. Leveraging the capabilities of Large Language Models (LLMs) such as GPT-3.5-Turbo, Llama-2-7b-chat-hf, Llama-2-13b-chat-hf, and Mistral-7B-Instruct-v0.2, this work carries out a comprehensive fine-tuning process tailored to the syntax-improvement task. Through careful evaluation, we demonstrate that the fine-tuned LLMs exhibit a marked improvement in addressing syntax-related challenges, serving as a potent tool for students to identify and rectify their syntactic errors. The findings not only highlight the effectiveness of the proposed dataset in elevating the performance of LLMs for syntax enhancement but also point to a promising path for using advanced language models to support language acquisition. This research contributes to the broader field of language-learning technology by showcasing the potential of LLMs in facilitating students' linguistic development.
- Education > Curriculum > Subject-Specific Education (1.00)
- Education > Educational Technology > Educational Software (0.71)
Use NVIDIA + Docker + VS Code + PyTorch for Machine Learning
Everybody hates installing NVIDIA drivers: you have to download them manually, then install CUDA, make sure every component's version matches, and update everything from time to time. Since Ubuntu 20.04, the drivers are installed automatically by the OS. That's great, but you lose control over them. Maybe you need a specific version, or your code only works with CUDA 10. In that case, things can get messy.
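The workaround the post alludes to is to keep the host driver generic and pin the CUDA version inside a container. A minimal sketch, assuming Docker 19.03+ with the NVIDIA Container Toolkit installed (the image tags are illustrative; pick the CUDA or framework version your code needs):

```shell
# Verify the GPU is visible from a container pinned to a specific CUDA release.
docker run --rm --gpus all nvidia/cuda:10.2-base nvidia-smi

# Same idea for PyTorch: the framework and its CUDA runtime live in the image,
# not on the host.
docker run --rm --gpus all pytorch/pytorch:latest \
    python -c "import torch; print(torch.cuda.is_available())"
```

This way, upgrading or switching CUDA versions is just a matter of changing the image tag, and the host's auto-installed driver is left alone.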
Running TensorFlow 2.2 in Azure Machine Learning Studio
One challenge with keeping up with bleeding-edge AI and ML frameworks is that changes can easily outpace the APIs and SDKs built on top of these tools. Fortunately, since these tools are largely open source and distributed through common package managers, the updates can be used before the platforms officially support them, albeit with some workarounds. Azure Machine Learning Studio (AzureML) is a platform service that provides end-to-end management of machine learning and data science workloads. The tool provides experiment tracking, dataset management, a model repository, and deployment services that enable data scientists to train, validate, and deploy their models at cloud scale. Running TensorFlow 2.2 in AzureML is fairly straightforward, given the existing support for the TensorFlow framework.
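The generic form of the workaround is simply to pin the newer framework version from PyPI in the environment your run uses, rather than waiting for a curated environment. A minimal sketch (the exact mechanism for injecting this into an AzureML run — e.g. a pip requirements entry in the environment definition — depends on the SDK version you use):

```shell
# Pull TensorFlow 2.2 straight from the package index, then confirm
# the interpreter actually picked it up.
pip install tensorflow==2.2.0
python -c "import tensorflow as tf; print(tf.__version__)"
```

The same pattern applies to any framework update that lands on PyPI before the platform's curated environments catch up.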
Combining a Context Aware Neural Network with a Denoising Autoencoder for Measuring String Similarities
Lazreg, Mehdi Ben, Goodwin, Morten
Measuring similarities between strings is central for many established and fast-growing research areas, including information retrieval, biology, and natural language processing. The traditional approach to string similarity measurement is to define a metric over a word space that quantifies and sums up the differences between characters in two strings. The state of the art in the area has, surprisingly, not evolved much during the last few decades. The majority of the metrics are based on a simple comparison between character distributions, without consideration for the context of the words. This paper proposes a string metric that encompasses similarities between strings based on (1) the character similarities between the words, including non-standard and standard spellings of the same words, and (2) the context of the words. Our proposal is a neural network composed of a denoising autoencoder and what we call a context encoder, specifically designed to find similarities between words based on their context. The experimental results show that the resulting metric succeeds in 85.4% of the cases in finding the correct version of a non-standard spelling among the closest words, compared to 63.2% for the established Normalised-Levenshtein distance. In addition, we show that words used in similar contexts are, with our approach, computed to be more similar than words with different contexts, a desirable property missing in established string metrics.
saiprashanths/dl-setup
A detailed guide to setting up your machine for deep learning research. Includes instructions to install drivers, tools, and various deep learning frameworks. This was tested on a 64-bit machine with an Nvidia Titan X, running Ubuntu 14.04. There are several great guides with a similar goal. Some are limited in scope, while others are not up to date.